perm filename ITT.3[ITT,WD] blob
sn#203695 filedate 1976-02-21 generic text, type T, neo UTF8
3. Problem Statements
\J This section defines a number of cryptographic problems. These problems
can be divided into those concerned with privacy and those concerned with
authentication. The \{privacy problem\}=2;=2; is to prevent extraction of
information by unauthorized persons from messages transmitted over a public
channel. The \{authentication problem\}=2;=2; is to prevent the injection of
messages by unauthorized persons into a public channel. A channel is considered
public if its level of security is inadequate for the needs of its users. Thus
a channel such as a telephone line may be considered private by some users and
public by others. Similarly, it is possible for a channel to be
threatened with eavesdropping or injection or both, depending on its use.
Returning to the telephone line example, the threat of injection is always
present since the called party cannot determine from which phone he is being
called and the cost of such injection is minimal. The threat of eavesdropping
through use of a wiretap is also present, but the cost and potential liability
of tapping make this threat negligible when the economic value of the
information involved is small.
There is another threat which will arise in several authentication
problems. This is the \{threat\}=2;=2; \{of\}=2;=2; \{compromise\}=2;=2; \{of
the receiver's\}=2;=2; \{authentication data\}=2;=2;. This threat is motivated
by the situation in multiuser networks where the receiver is often the system
itself. The reciever's password tables
and other authentication data are then more
vulnerable to theft than those of the transmitter(an individual user).
As seen later, techniques for protecting against this threat
also protect against the \{threat of dispute\}=2;=2;.
That is, a message may be sent, but this is later denied by either the transmitter
or the receiver. Or, it may be alleged by either party that a message was sent
when in fact none was. We need a form of unforgeable digital receipt.
For example, a dishonest stockbroker may try to
cover up unauthorized buying and selling for personal gain by forging orders
from clients. Or a client may disclaim an order actually authorized by him,
but which is later seen to cause a loss. We will introduce concepts which
allow the receiver to verify the authenticity of a message, but
prevent him from generating apparently authentic messages, thereby
protecting against both the threat of compromise of the receiver's
authentication data and the threat of dispute.
Having divided our problems into those of privacy and authentication we
will sometimes further subdivide authentication into
message authentication, which is the
problem defined above, and user authentication, in which the only task of the
system is to verify that an individual is who he claims to be. For example, the
identity of an individual who presents a credit card must be verified, but there
is no message which he wishes to transmit.
In spite of this seeming lack of a message in user authentication, the
two problems are largely equivalent. In user authentication there is an
implicit message "I AM USER X.", while in message authentication we are really
just verifying the identity of the party sending us messages.
Differences in the threat environments and other aspects of these two
subproblems sometimes make it convenient to distinguish between them.
In particular, a message authentication system is almost always faced with the
threat of eavesdropping, whereas many user authentication systems neglect
this threat. For example, the password login procedure used in most computer systems
is rendered useless by an eavesdropper. The designers of these systems feel that
wiretapping is not a major threat. We will treat message authentication as
the primary problem and indicate where user authentication differs
significantly in its characteristics.
With these preliminary definitions completed we now turn to problem
statements.
\{Problem 1\}=2;=2;:
The classical \{privacy problem\}=2;=2; has been discussed in section 2,
together with three threat models corresponding to statistical,
known plaintext, and
chosen plaintext attacks. We should reiterate that in this paper we are
concerned solely with computational security and therefore with a known
plaintext attack or, if possible, a chosen plaintext attack.
\{Status\}=2;=2;:
One time pad systems can be shown to be secure against any attack[].
These, however, require too much key for most applications and will receive
scant attention in this paper.
At present, no systems exist which have been proved computationally
secure. Many systems in use, however, have resisted intensive cryptanalytic
attack and are strongly thought to be computationally secure.
As discussed in section 5, we expect continuing developments in
computer science and information theory to provide such proofs before the
end of the century. This will revolutionize the certification process,
replacing the paradigm of certification by cryptanalysis which has stood
since the late Renaissance.
\{Problem 2\}=2;=2;:
Classical \{message authentication\}=2;=2; is like a digital signature,
and allows the recipient of a message to verify its origin. If, like a
signature, the authentication sequence is constant from message to message there
is no protection against an eavesdropper forging messages of his choosing. Such
a system is primarily useful in \{one-time authentication\}=2;=2;, where all
possible recipients know when the authenticator is first used and do not accept
any later new messages using that authenticator.
If there is no threat
of eavesdropping (e.g., in certain user authentication systems),
"one-time" authentication can be used on many messages.
In \{multiple message authentication\}=2;=2; the authenticator is
message dependent in a way which makes it computationally infeasible
for an eavesdropper to
authenticate a message of his choosing from
knowledge of past authenticated messages.
\{Status\}=2;=2;:
One-time authentication is easily accomplished by appending a random b
bit authenticator to the end of the message. A forger has
probability 2\∩-b\⊗ of being successful if he has not seen the authenticator.
If b=20 the authenticator requires only three ASCII characters and has
probability 1 in a million of being successfully forged.
When used for user authentication, this is the usual password login scheme
used in most computers.
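In modern notation the one-time scheme is trivial to sketch. The following Python fragment (illustrative only, with b = 20 as in the text; the function names are ours, not the paper's) shows the append-and-compare logic:

```python
import secrets

B = 20  # authenticator length in bits; a forger succeeds with probability 2**-B

def new_authenticator() -> int:
    # agreed between transmitter and receiver in advance, kept from eavesdroppers
    return secrets.randbits(B)

def accept(received_auth: int, expected_auth: int) -> bool:
    return received_auth == expected_auth

auth = new_authenticator()
message = b"SELL 100 SHARES"
# the transmitter sends (message, auth); the receiver checks:
assert accept(auth, auth)                        # legitimate message accepted
assert not accept((auth + 1) % (1 << B), auth)   # a wrong guess is rejected
```

With B = 20 the authenticator fits in three ASCII characters and a blind forgery succeeds only one time in about a million, as stated above.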
Multiple message authentication is easily achieved by appending a b
bit sequence, either fixed or key dependent, to the end of the message and
encrypting the entire sequence with the key []. This technique uses the same key
to provide both privacy and authentication. The receiver verifies the authenticity
of the message by checking for the authenticator at the end of the deciphered
message. An eavesdropper cannot determine the authentication sequence and
forge messages unless he breaks the cryptosystem.
It is necessary that the cryptosystem operate on the message and
authenticator as a block, and that changing even a few bits in the ciphertext
block cause many widely dispersed bits to change in the plaintext block.
Otherwise an opponent could change a few, early bits in the cryptogram and hope
that these changes do not propagate into the authenticator.
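The encrypt-with-appended-authenticator technique can be sketched as follows. The block cipher here is a toy four-round Feistel network built from SHA-256, chosen only because it diffuses well; the paper does not specify a cipher, so the construction and all names are our illustrative assumptions:

```python
import hashlib

def _round(key: bytes, i: int, half: bytes) -> bytes:
    return hashlib.sha256(key + bytes([i]) + half).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encipher(key: bytes, block: bytes) -> bytes:
    # 4-round Feistel network on a 32-byte block; a stand-in for any block
    # cipher with good diffusion, not a production design.
    L, R = block[:16], block[16:]
    for i in range(4):
        L, R = R, xor(L, _round(key, i, R))
    return L + R

def decipher(key: bytes, block: bytes) -> bytes:
    L, R = block[:16], block[16:]
    for i in reversed(range(4)):
        L, R = xor(R, _round(key, i, L)), L
    return L + R

AUTH = b"\x00" * 4   # fixed authenticator appended before enciphering

def send(key: bytes, message: bytes) -> bytes:
    assert len(message) == 28
    return encipher(key, message + AUTH)

def receive(key: bytes, cryptogram: bytes):
    plain = decipher(key, cryptogram)
    return plain[:-4] if plain.endswith(AUTH) else None  # None = reject

key = b"shared secret key"
c = send(key, b"BUY 100 XYZ AT MARKET OPEN :")       # 28-byte message
assert receive(key, c) == b"BUY 100 XYZ AT MARKET OPEN :"
tampered = bytes([c[0] ^ 1]) + c[1:]                  # flip one early bit
assert receive(key, tampered) is None                 # diffusion scrambles AUTH
```

The final assertion illustrates the point made above: because the cipher diffuses every ciphertext bit across the whole plaintext block, an opponent cannot change early bits and hope the authenticator survives.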
The user authentication equivalent of multiple message authentication is
the IFF problem introduced in the last section. As noted there, the challenger
issues a time varying challenge which is enciphered by the user under a user
specific key to obtain a response. This response convinces the challenger that
the user possesses the key, without divulging the key to an eavesdropper.
The difference between this technique and that suggested for multiple
message authentication is due to the different parameters involved. Multiple
message authentication could be done in a manner equivalent to that used by an
IFF system. The message would be transmitted in both plaintext and ciphertext
forms. The plaintext corresponds to the challenge and the ciphertext corresponds
to the response in an IFF system. This technique doubles the transmitted data,
and for messages over 50 bits in length is therefore much less efficient
than the original technique suggested for multiple message authentication.
Some thought will show that the original multiple message technique can
also be used in an IFF system but is less efficient than the usual IFF
technique.
\{Problem 3\}=2;=2;:
\{One-way message authentication\}=2;=2; allows the recipient of a
message to verify its authenticity but prevents him from forging properly
authenticated messages. As noted earlier, this protects against the threat of
compromise of the receiver's authentication data, and sometimes
against the threat of
dispute.
Since the receiver's authentication tables can be public, in a network
the transmitter could broadcast his message to everyone, and although all other
users can verify the authenticity of the transmission, no one but the legitimate
transmitter can send valid messages. If used this way,
oneway message authentication
yields a \{broadcast cipher\}=2;=2; in which everyone knows how to decipher
cryptograms, but only one user knows how to encipher messages. Here,
cryptography is not used for privacy since everyone knows how to "decipher".
Rather, it allows an ability to verify authenticity without an ability to
originate authenticators. The dual problem of \{multiple access ciphers\}=2;=2;
is treated in the statement of problem #5.
As in classical message authentication, we find that one-way message
authentication has one-time and multiple message subproblems. There is now also a
third subproblem which we call the \{single message problem\}=2;=2;. In
onetime oneway authentication we do not have any protection against an
eavesdropper forging arbitrary messages after we send our first message,
although now not even the receiver can forge a message prior to that time.
There is thus very limited protection against disputes.
In
\{single message one-way authentication\}=2;=2; only one message is sent. But
even after it is sent the receiver or an eavesdropping thief (i.e., an
eavesdropper who also has stolen the receiver's authentication information)
cannot forge a message other than the one we sent. By including the time as part
of the message even this forgery could be detected.
There is now greater protection against disputes.
In \{multiple\}=2;=2;
\{message one-way authentication\}=2;=2; we are able to send many messages in
the same authentication key, and no others can be properly forged by either the
receiver or an eavesdropping thief.
Now essentially complete protection against disputes is possible. The receiver
can prove that he received a specific message from a specific transmitter.
And, by requesting a oneway authenticated receipt, including a statement of the
message, the transmitter can prove that a specific receiver received his message.
We now define several closely related \{oneway user\}=2;=2;
\{authentication\}=2;=2;
problems, in which theft of the system's authentication or password data does
not compromise the system. If there is no threat from eavesdropping and the
only threat is theft of the system's static password directory, we have the
\{oneway password problem\}=2;=2; introduced in section 1 for computer logon.
As noted there, applying a oneway function to the password allows solution of
this problem. Since this problem is equivalent to the onetime, oneway
message authentication problem defined above, oneway functions can also be used
to solve that problem. There, the bit sequence f(PW) is known to the receiver
and the bit sequence PW is appended as an authenticator to the one message sent.
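A minimal sketch of the oneway password scheme follows, with SHA-256 standing in for the unspecified one-way function f (the user name and password are of course hypothetical):

```python
import hashlib

def f(pw: bytes) -> bytes:
    # SHA-256 as an assumed stand-in for the paper's one-way function f
    return hashlib.sha256(pw).digest()

# Enrollment: the system stores only f(PW); theft of this table
# does not reveal PW itself.
table = {"alice": f(b"correct horse battery")}

def login(user: str, pw: bytes) -> bool:
    # the login test applies f to the offered password and compares
    return table.get(user) == f(pw)

assert login("alice", b"correct horse battery")
assert not login("alice", b"guess")
```

For the onetime, oneway message authentication equivalent, the stored value f(PW) plays the role of the receiver's public authentication data and PW itself is the appended authenticator.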
There is a similar, although less interesting equivalent in user
authentication to the single message one-way authentication problem. Here the
user can respond to one system challenge (e.g. "What is the date and time?
Authenticate your answer.") and yet the system cannot forge a new login even if
it remembers the previous exchange.
The equivalent of multiple message one-way authentication is extremely
interesting since it allows continual logins even under the combined threats
of both eavesdropping and compromise of system
authentication tables. This is the \{one-way IFF problem\}=2;=2; mentioned in
section 1. The challenge (implicit or actually sent) could again be to
authenticate the date and time. It is seen that a multiple message one-way
authentication system could thus provide a one-way IFF system, and some thought
will show that the converse is also true.
Since the receiver's authentication data can be public in a one-way IFF
system, any user could authenticate the identity of any other user,
either prior to or during a conversation. This would eliminate the need for
secure distribution of authentication data for user-to-user communications and
would result in a \{public key authentication system\}=2;=2;.
\{Status\}=2;=2;:
The solution to the one-way password problem, and therefore the
onetime, oneway message authentication problem is through use of oneway
functions. The status of these problems thus reduces to that of oneway
functions. While there are currently no provably oneway functions, they
undoubtedly exist and appear easy to design. Oneway functions are so basic to
cryptography that section 4 is entirely devoted to a discussion of them and
their status. As shown there, a cryptosystem which is secure against a known
plaintext attack can be turned into a oneway function. Since such
cryptosystems appear to exist, so do oneway functions.
The single message oneway authentication problem and its user
authentication analog have a partial solution which was suggested to the authors
by Leslie Lamport of Massachusetts Computer Associates. This technique requires
the existence of a one-way function f mapping k-dimensional binary space into
itself. We anticipate that k would be on the order of 100. If the transmitter
wishes to send an N bit message he generates 2N randomly chosen
k-dimensional binary vectors
x\∪1\⊗, X\∪1\⊗, x\∪2\⊗, X\∪2\⊗, ... , x\∪N\⊗, X\∪N\⊗
which he keeps secret. The receiver is given the corresponding images under f,
namely
y\∪1\⊗, Y\∪1\⊗, y\∪2\⊗, Y\∪2\⊗, ... , y\∪N\⊗, Y\∪N\⊗. Later,
when the message _m_ = (m\∪1\⊗, m\∪2\⊗, ... m\∪N\⊗) is to be sent, the transmitter
sends x\∪1\⊗ or X\∪1\⊗ depending on whether m\∪1\⊗ = 0 or 1. He sends x\∪2\⊗ or
X\∪2\⊗ depending on whether m\∪2\⊗ = 0 or 1, etc. The receiver operates with f
on the first received block and sees whether it yields y\∪1\⊗ or Y\∪1\⊗ as its
image and thus learns whether it was x\∪1\⊗ or X\∪1\⊗, and whether m\∪1\⊗ = 0 or
1. In a similar manner the receiver is able to determine m\∪2\⊗, m\∪3\⊗, ...
m\∪N\⊗. But the receiver is incapable of forging a change in even one bit of
\{m\}=2;=2;.
This is only a partial solution because of the k-fold data expansion
required, and because of the large value of k (approximately 100) required.
There is, however, a modification which eliminates the expansion problem when N
is on the order of a megabit or larger. Let g be a one-way mapping from binary
N-space to binary n-space where n is approximately 50. Take the N bit message
\{m\}=2;=2; and operate on it with g to obtain the n bit vector \{\f2m\}=2;=2;.
Then use the previous scheme to send \{\f2m\}=2;=2;. If N = 10\∩6\⊗, n = 50,
and k = 100 this adds kn = 5000 authentication bits to the message. It thus
entails only a 0.5% data expansion during transmission (or 1.5% if the initial
exchange of y\∪1\⊗, Y\∪1\⊗, ... , y\∪n\⊗, Y\∪n\⊗ is included). And, even though
there are a large number of other messages (2\∩N-n\⊗ on the average) with the
same authentication sequence, the onewayness of g makes them
computationally infeasible to find
and thus to forge. Actually we need g to be somewhat stronger than a normal
one-way function, since our opponent has not only \{\f2m\}=2;=2;, but also one
of its inverse images \{m\}=2;=2;. It must be hard even given \{m\}=2;=2; to
find another, different inverse image of \{\f2m\}=2;=2;. Finding such functions
again appears to offer little trouble.
The oneway IFF and multiple message one-way authentication problems are
more open. If a public key cryptosystem can be found (see
section 1 and problem #4 below)
then it can be used to provide a solution. For example, to obtain a oneway IFF
system from a public key cryptosystem the challenge (e.g. the date and time)
could be operated on by a user using his secret \{deciphering\}=2;=2; key
to obtain the response. The
challenger enciphers this response using the public enciphering key of the
claimed user. If the original
challenge is obtained, the responder must have known the
secret deciphering key of, and thus be, the claimed user. Since a fairly strong
argument is made in favor of the existence of public key cryptosystems, by
induction a strong argument is made in favor of one-way IFF systems. The
converse is not true, and development of a one-way IFF system would not
necessarily imply the development of a public key cryptosystem.
Until public key cryptosystems are developed there is another, albeit
partial, solution to the one-way IFF problem. The user generates a password X
which he keeps secret. He gives the system f\∩N\⊗(X) where f is a one-way
function. At time t the appropriate response to a challenge is f\∩N-t\⊗(X),
which can be checked by the system by applying f\∩t\⊗(X). Because of the
onewayness of f, past responses are of no value in forging a new response.
The problem with this solution is that it can require a fair amount of
computation for legitimate log-on (although many orders of magnitude less than
for forgery). If for example t is incremented every second and the system must
work for one month on each password then N = 2.6 million. Both the user and the
system must then iterate f an average of 1.3 million times per log-in. While
not insurmountable, this problem obviously limits use of the technique. The
problem could be overcome if a simple method for calculating f\∩2↑j\⊗ for j =
1,2, ... could be found, much as X\∩8\⊗ = ((X\∩2\⊗)\∩2\⊗)\∩2\⊗. For then binary
decompositions of N-t and t would allow rapid computation of f\∩N-t\⊗(.) and
f\∩t\⊗(.).
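A sketch of this challenge-response chain, again with SHA-256 as an assumed f and a deliberately small N; the names and parameters are illustrative:

```python
import hashlib

def f(v: bytes) -> bytes:
    # SHA-256 as an assumed stand-in for the one-way function f
    return hashlib.sha256(v).digest()

def iterate(v: bytes, times: int) -> bytes:
    for _ in range(times):
        v = f(v)
    return v

N = 1000                        # lifetime of the password in time steps
X = b"the users secret password"  # kept by the user only
stored = iterate(X, N)          # the system stores f^N(X); its theft is harmless

def respond(t: int) -> bytes:
    return iterate(X, N - t)    # the user's response at time t

def check(t: int, response: bytes) -> bool:
    # the system applies f another t times and compares with f^N(X)
    return iterate(response, t) == stored

assert check(137, respond(137))
assert not check(138, respond(137))  # a replayed response fails at a later time
```

The cost noted above is visible here: both sides iterate f on the order of N times per login, which is why a fast method of computing high iterates of f would be valuable.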
\{Problem 4\}=2;=2;:
\{Public key cryptosystems\}=2;=2; have already been briefly introduced
in section 1. In such systems each user would have a pair of inverse keys, E
and D, for enciphering and deciphering. For reasons of security, generation of
this pair is best done at the user's terminal which is assumed to have some
computational power. The user then keeps the deciphering key D secret but makes
the enciphering key E public by placing it in a central file of such keys.
Anyone can then encrypt messages and send them to the user, but no one else can
decipher messages intended for the user. As such, public key cryptosystems can
be regarded as \{multiple access ciphers\}=2;=2;.
By regularly checking the file of enciphering keys the user guards
against another user altering or replacing E. Any such mischief is reported and
settled by other authentication means, such as personal appearance.
The crucial feature of a public key system is that while it is
relatively easy to generate an E-D pair, it is computationally infeasible
to compute D from E. Ideally, it should be not only easy to generate
an E-D pair, but it should also be possible to do this completely automatically
through a publicly available transformation from a random bit string to E-D.
Since the general system in which E and D are used must also be public,
specifying E specifies a complete algorithm for transforming input messages into
output cryptograms. As such a public key system is really a set of \{trap-door
one-way functions\}=2;=2;. These are functions which are not really one-way in
that simply computed inverses exist. But given an algorithm for the forward
function it is computationally infeasible to find a simply computed
inverse. Only through knowledge of certain \{trap-door information\}=2;=2;
(e.g., the random bit string which produced the E-D pair) can one easily find D
from E.
\{Status\}=2;=2;:
At this time we have neither a proof that public key systems exist, nor
a demonstration system. We hope to have a demonstration E-D pair in the near
future, and expect that if the demonstration pair successfully resists attack
then we will be able to design an algorithm for automatically generating E-D
pairs of a similar kind.
Ultimately we hope to see provably secure public key cryptosystems.
In the meantime, the following reasoning is given to
make these concepts more plausible.
A suggestive, although unfortunately useless example is to
encipher the plaintext _m_,
represented as a binary n-vector \{m\}=2;=2;,
by multiplying it by an invertible binary nxn matrix E. The cryptogram
\{c\}=2;=2; thus equals E\{m\}=2;=2;.
Letting D = E\∩-1\⊗ we have \{m\}=2;=2; = D
\{c\}=2;=2;. Thus both enciphering and deciphering require
about n\∩2\⊗ operations. Calculation of D from E, however, involves a matrix
inversion which is a harder problem. And it is at least conceptually simpler to
obtain an arbitrary pair of inverse matrices than it is to invert a given
matrix. Start with the identity matrix I and do elementary row and column
operations to obtain an arbitrary invertible matrix E. Then starting with I do
the inverses of these same elementary operations in reverse order to obtain D =
E\∩-1\⊗. The sequence of elementary operations constitutes the trap door
information and could be easily determined from a random bit string.
Unfortunately, matrix inversion takes only about n\∩3\⊗ operations even
without the trap door information. The ratio of "cryptanalytic" time (i.e.,
computing D from E) to enciphering or deciphering time is thus at most n. To
obtain ratios of 10\∩6\⊗ or greater would thus require enormous block sizes.
Also, it does not appear that knowledge of the elementary operations used to
obtain E from I greatly reduces the time for computing D. And, since there is
no round-off error in binary arithmetic, numerical stability is unimportant
in the matrix inversion. In spite of its lack of practical utility, this matrix
example is still useful for clarifying the relationships necessary in a
public key system.
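The matrix construction can be made concrete over GF(2). The toy sketch below uses n = 8 with rows stored as bit masks; the recorded sequence of elementary row operations is the trap door information, and since each such operation is its own inverse mod 2, replaying them in reverse order on I yields D:

```python
import random

n = 8  # toy block size in bits; each row of a matrix is an n-bit integer

def identity():
    return [1 << i for i in range(n)]

random.seed(1)
ops = [(random.randrange(n), random.randrange(n)) for _ in range(40)]
ops = [(i, j) for i, j in ops if i != j]  # elementary op: add row i to row j

E = identity()
for i, j in ops:
    E[j] ^= E[i]            # row_j += row_i (mod 2); keeps E invertible

D = identity()
for i, j in reversed(ops):  # same ops in reverse order give D = E^-1,
    D[j] ^= D[i]            # since each op is its own inverse over GF(2)

def apply(M, v):
    # matrix-vector product over GF(2): bit r of the result is parity(row_r & v)
    out = 0
    for r in range(n):
        out |= (bin(M[r] & v).count("1") & 1) << r
    return out

m = 0b10110001
c = apply(E, m)             # encipher with the public key E
assert apply(D, c) == m     # decipher with the secret key D
```

As the text explains, this example fails as a real public key system only because inverting E without the trap door costs a mere n\∩3\⊗ operations.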
A more practical direction uses the above observation that we are
seeking a pair of easily computed inverse algorithms E and D, such that D
is hard to infer from E. This is not as impossible as it may sound.
Anyone who has tried to determine what operation is accomplished by someone
else's machine language program knows that E itself (i.e., what E does) can be hard to
infer from an algorithm for E. If the program were to be made purposefully
confusing through addition of unneeded variables, statements and outputs, then
determining an inverse algorithm could be made very difficult. Of course,
E must be complicated enough to prevent its identification from input-output
pairs.
Another idea appears even more promising. Suppose we start with a
schematic of a 100 bit input, 100 bit output circuit which merely is a set of
100 wires implementing the identity mapping. Then select 4 points in the
circuit at random, break these wires, and insert AND, OR and NOT gates which
implement a randomly chosen 4 bit to 4 bit invertible mapping (a 4 bit S box in
Feistel's notation []). Then repeat this insertion operation approximately 1000
times to obtain an enciphering circuit E. Knowing the sequence of operations
which led to the final E circuit allows one to easily design an inverse circuit
D. If however the gates are now randomly moved around on the schematic of E to
hide their associations into S boxes, an opponent would have great difficulty in
reconstructing the simple description of E in terms of S boxes, and therefore
in constructing a simple version of D. His task
could be further complicated by using reduction techniques (e.g., Karnaugh maps)
or expansion techniques (e.g., ~(AB) = ~A or ~B, or expressing a logical variable
in terms of previous variables), and by adding additional, unneeded S boxes and
outputs.
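A toy rendering of this construction in software (16 wires and 50 S box insertions instead of 100 and roughly 1000): the recorded design sequence is the trap door that makes the inverse circuit D easy to build, while an opponent sees only the composed mapping E:

```python
import random

WIDTH = 16    # toy state width in wires (the text uses 100)
ROUNDS = 50   # number of random S box insertions (the text suggests about 1000)
random.seed(7)

# Each insertion: pick 4 distinct wire positions and a random invertible
# 4-bit-to-4-bit mapping (an S box); the recorded sequence is the trap door.
design = []
for _ in range(ROUNDS):
    wires = random.sample(range(WIDTH), 4)
    sbox = list(range(16))
    random.shuffle(sbox)           # a random permutation is invertible
    design.append((wires, sbox))

def get4(state, wires):
    return sum(((state >> w) & 1) << k for k, w in enumerate(wires))

def put4(state, wires, val):
    for k, w in enumerate(wires):
        state = (state & ~(1 << w)) | (((val >> k) & 1) << w)
    return state

def E(state):                      # the public enciphering circuit
    for wires, sbox in design:
        state = put4(state, wires, sbox[get4(state, wires)])
    return state

def D(state):                      # easy to build only *with* the design sequence
    for wires, sbox in reversed(design):
        inv = [0] * 16
        for a, b in enumerate(sbox):
            inv[b] = a             # invert each S box
        state = put4(state, wires, inv[get4(state, wires)])
    return state

m = 0b1011000111001010
assert D(E(m)) == m
```

Hiding the grouping of gates into S boxes, as described above, would correspond here to publishing only an obfuscated gate-level listing of E rather than the `design` list.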
While, for ease of exposition, the above description has been in terms
of a hardware implementation, a software version is obviously of most interest.
The hardware description is also valuable in exemplifying a generally useful
idea. One can build a good public key cryptosystem from elementary building
blocks if there is a general framework for describing the concatenation of these
elementary blocks. Here the elementary building blocks are S boxes and the
general framework is the schematic diagram.
A transformation expressed in the general framework must be hard to invert and
must therefore
hide the sequence of elementary building blocks so that no one other than the
designer can easily implement the sequence of inverse elementary operations. A
little thought will show that the matrix example had a similar structure, except
there the general class of transformations obtainable was too easy to invert.
Merkle[] has independently studied the problem of distributing keys
over an insecure channel. His approach is different from the public key
cryptosystems suggested above and he has the distinction of having a solution whose
cryptanalytic cost grows as n↑2, where n is the cost to the legitimate users.
While the ratios n↑2 : n obtainable are not large enough for high grade
cryptography, improvements may be possible.
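Merkle's key distribution idea can be sketched as follows; the parameters, the SHA-256 based toy encryption, and all names are our illustrative assumptions, not Merkle's own construction:

```python
import hashlib, random, secrets

N = 200            # number of puzzles; an eavesdropper expects to solve N/2 of them
PUZZLE_BITS = 12   # key space per puzzle; the legitimate solver does 2**12 work

def prf(key: bytes, label: bytes) -> bytes:
    return hashlib.sha256(key + label).digest()

def make_puzzles():
    # Each puzzle hides (puzzle id, session key) under a weak, brute-forceable key.
    table, puzzles = {}, []
    for pid in range(N):
        session_key = secrets.token_bytes(16)
        weak = secrets.randbits(PUZZLE_BITS).to_bytes(2, "big")
        pad = prf(weak, b"pad")
        body = pid.to_bytes(2, "big") + session_key
        ct = bytes(a ^ b for a, b in zip(body, pad))
        check = prf(weak, b"check")[:4]    # lets a solver recognize the right key
        puzzles.append((ct, check))
        table[pid] = session_key           # the maker remembers all session keys
    return puzzles, table

def solve_one(puzzles):
    # The partner picks one puzzle at random and brute-forces its weak key.
    ct, check = random.choice(puzzles)
    for guess in range(2 ** PUZZLE_BITS):
        weak = guess.to_bytes(2, "big")
        if prf(weak, b"check")[:4] == check:
            body = bytes(a ^ b for a, b in zip(ct, prf(weak, b"pad")))
            return int.from_bytes(body[:2], "big"), body[2:]
    raise RuntimeError("no key found")

puzzles, table = make_puzzles()
pid, session_key = solve_one(puzzles)
assert table[pid] == session_key   # both ends now share a session key
```

The partner announces only the recovered puzzle id in the clear; the eavesdropper, not knowing which puzzle was chosen, must solve about N/2 of them, which is the source of the quadratic work ratio cited above.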
\{Problem 6\}=2;=2;:
\{Trap doors\}=2;=2; have already been seen in the previous problem in
the form of \{trap\}=2;=2; \{door\}=2;=2; \{one-way\}=2;=2; \{functions\}=2;=2;,
and other variations exist. A \{trap door cipher\}=2;=2; is one
which strongly resists cryptanalysis unless one is in possession of certain,
secret \{trap door information\}=2;=2; used in the design of the cipher. This
allows the designer to easily break the system after he has sold it to a client
and yet to falsely maintain his reputation as a builder of secure systems. It
is important to note that it is not greater cleverness or knowledge of
cryptography which allows the designer to do what others cannot. If he were to
lose the trap door information he would be no better off than anyone else. The
situation is precisely analogous to that of a combination lock. As long as I
remember the combination, I can do in seconds what even a skilled locksmith
would require hours to accomplish. And yet if I forget the combination I have
no advantage.
By definition, we will require that a trap door problem be one in which
it is computationally feasible to devise the trap door. This leaves room for
yet a third type of entity for which we shall use the prefix "quasi." For
example a \{quasi oneway function\}=2;=2; is not oneway in that an easily computed inverse
exists. However, it is computationally infeasible, even for the designer, to
find the easily computed inverse. Therefore a quasi oneway function can be used
in place of a oneway function with no loss in security.
Losing the trap door information to the trap door oneway function
makes it into a quasi oneway function. There may also be oneway functions which
are not obtainable in this manner.
It is entirely a matter of definition that quasi oneway functions are
excluded from the class of oneway functions. One could instead talk of oneway
functions in the wide sense or in the strict sense.
Similarly, a quasi secure cipher is a cipher which will successfully
resist cryptanalysis, even by its designer, and yet for which there exists
an easily computed inverse (which is of course computationally infeasible to
find). Again, from a practical point of view, there is essentially no difference
between a quasi secure cipher and a secure one.
\{Status\}=2;=2;:
We currently have little basis for supporting the existence of trap door
ciphers. However they are a distinct possibility and should be remembered when
accepting a cryptosystem from a possible opponent. For example, should the IRS
use a cryptosystem, such as the one proposed by NBS [], which was largely chosen
by the intelligence community?
On another level, if trap door ciphers could be found, they could be
turned into public key cryptosystems by having the transmitter choose a key at
random, send an arbitrary plaintext-cryptogram pair and then the actual
cryptogram. The intended recipient, who made the trap door cipher public but
kept the trap door information secret, could use the known plaintext-cryptogram
pair to easily solve for the key and then decipher the actual
cryptogram. Yet anyone else, even though he knew the public cipher, could not
obtain the information.
We have already seen that public key cryptosystems imply the existence
of trap door one-way functions. The evidence presented to support public key
systems therefore also supports the existence of trap door, one-way functions.
However the converse is not true. For a trap door one-way
function to be usable as a public key cryptosystem it must be
invertible (i.e., have a unique inverse).
A related trap door problem, which is mostly of interest in establishing
that trap doors can exist and in thereby supporting the possibility of trap door
ciphers and one-way functions, is the \{trap door question\}=2;=2;. These are
questions which are essentially impossible for anyone to answer, with the
exception of the person asking the question. He has certain secret, trap door
information which allows him both to easily answer the question, and to easily
convince others of the correctness of his answer. Again, it is only the trap door
information, not greater training, which permits this.
Experience with poorly designed quiz problems tells us that trap door
questions are fairly easy to devise. At a somewhat more serious level, the
existence of one-way functions would imply the existence of trap door questions.
The person asking the question would choose an X at random from the domain,
operate with f to obtain Y = f(X), and ask "What is the inverse image of Y?".
The answer, X, to the question is also the trap door information.
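This construction is a one-liner in practice; the sketch below again assumes SHA-256 as the one-way function f:

```python
import hashlib, secrets

def f(v: bytes) -> bytes:
    # SHA-256 as an assumed stand-in for the one-way function f
    return hashlib.sha256(v).digest()

X = secrets.token_bytes(16)   # the secret trap door information
Y = f(X)                      # publish Y; ask "what is an inverse image of Y?"

def verify_answer(answer: bytes) -> bool:
    # anyone can check an offered answer, but only the asker can produce one
    return f(answer) == Y

assert verify_answer(X)
```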
\{Problem 7\}=2;=2;.
\{Quasi trap doors\}=2;=2; are closely related to trap doors, the only
difference being that the trap doors need not be easily devised. For example, a
\{quasi trap door one-way function\}=2;=2; is a function for which a simply
computed inverse exists, but given an algorithm for computing the forward
function it is computationally infeasible to find a simply computed inverse.
Comparison with the definition of a trap door one-way function shows that the
only difference is that we have dropped the requirement that it must be possible
to devise the trap door within a reasonable amount of computation. There is also
a difference in intent which is difficult to define precisely. A trap door is
intentional, whereas a quasi trap door is not. However, the quasi trap door is
not due to poor design. That is, if I design a quasi trap door one-way function
I need not fear that anyone will find the trap door, because finding the trap
door would lead to finding an easily computed inverse which, by definition, is
computationally impossible.
It thus follows that a quasi trap door one way function effectively is a
one-way function and entails no loss in security. The only difference between
the two is that in the former case an astronomical precomputation ability would
reduce the computational requirements of the inverse function. Since no one
will ever have such an ability, there is no effective difference. If we were to
change our definition of a one-way function to include precomputation time in
measuring the difficulty of inverting f(.) then quasi trap door one way function
would be a subclass of one-way functions. However, we excluded precomputation
time and insisted that all algorithms which implement f\∩-1\⊗(.) take impossibly
long to run, thus excluding quasi trap door functions from also being "truly"
one-way. This was largely a matter of convenience and if later experience should
suggest changing this convention there is no reason not to.
A quasi trap door cipher is defined in an entirely analogous manner.
Since in this paper we have not included precomputation time in measuring the
difficulty of cryptanalysis, quasi trap door ciphers are not computationally
secure by our definition. Again, this is an arbitrary point in the definition
and could be changed with no real effect.
\{Status\}=2;=2;:
There are two possible ways a quasi trap door could result. The first,
which provides the real motivation for our definition, is that a supposedly
secure cipher or one-way function may possess a quasi trap door. As noted
above, this would not diminish its usefulness. The second is that we could take
a trap door one-way function or cipher and destroy the trap door information to
make a quasi trap door entity.\.